First, I’m gonna guess that this gets killthreaded soon.
UFAI: Unless you tell me that you love big brother, I will create a million copies of you and torture each one for a million years.
Human: Piffle. In the next second, my quantum wave function will split into far more than a million copies. Therefore, the probability that I will find myself in one of your acausal copies, rather than a natural, causal copy, is negligible.
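One rough way to formalize the human’s claim, under naive copy-counting (every branch-copy and every simulated copy counting equally, which is exactly the assumption the replies below dispute): if the wave function splits into $N \gg 10^6$ natural copies in the next second while the simulations number $10^6$ in total, then

$$P(\text{simulated}) \approx \frac{10^6}{10^6 + N} \approx 0.$$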
Er, you really don’t understand what you’re talking about here. The simplest way to point this out is that if the UFAI follows through, then a million copies of you get simulated in every Everett branch along with your original.
EDIT: OK, serves me right for reading something dumb and replying before reading the objection and counterargument; but I think your argument still fails. I can see how you could think that all that matters for subjective anticipation is the sheer number of distinct copies with nontrivial quantum measure, but thinking about long-tail probability distributions should convince you that that wouldn’t add up to normality. (To say nothing of the inherent sorites problem of distinguishing between branches of a continuous configuration space.) The AI’s threat is valid.
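To spell out the arithmetic behind the every-branch rebuttal above, under the same naive copy-counting: if $10^6$ copies are simulated in each of the $N$ branches rather than $10^6$ in total, the branch count cancels,

$$P(\text{simulated}) \approx \frac{10^6 N}{10^6 N + N} = \frac{10^6}{10^6 + 1} \approx 1 - 10^{-6},$$

so sheer copy-counting no longer rescues the human.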
I had another reply relating to killthread, but I can’t tell if it was deleted purposefully or by some computer glitch. If this reply is not deleted, I will eventually edit it to include my prior reply. And if it is, I’m no danger; I really do believe the human’s line of logic in this post, and so I’m not inclined to spend any extra time fighting for my right to express what I consider an interesting-but-non-fundamental truth.
I don’t really know much about killthread. I suppose I could make the same points using heaven instead of hell, but that wouldn’t really change things for anyone with a modicum of insight.
The point is that I am describing a chain of logic that does not lead to being afraid of acausal threats. I think the overall logic is valid (though in the bits with parallel argumentation, the non-key points are more tentative). But say that someone with killthread capacity is afraid of people being afraid of acausal threats. Even if they believe my logic is invalid—unless it were obviously or abrasively so—there would be no reason I can see for them to killthread me. They just wouldn’t point out what they saw as my errors.
I think there’s sufficient reason for a moratorium on discussion of acausal threat scenarios, at least for the time being, until people’s imaginations settle down again.
I can see that, if this line of thought could be the downfall of humanity, it’s best to just avoid it altogether. But that cat is out of the bag; even if one wishes it weren’t, it’s not at all clear that fighting it isn’t counterproductive. And the “scenario” I pose is more of an anti-scenario. The motivation is ludicrous (big brother?), and the entire purpose is to demonstrate the impossibility and pointlessness of acausal threats.
Anti-acausals, I am your ally.
The Net interprets censorship as damage and routes around it.
If the universe is finite, then the configuration space isn’t actually continuous (if by that you mean uncountable). I’m not saying that the universe is pixelated in some simplistic sense, just that the overall timeless wave function contains a finite amount of information. So I don’t see a sorites problem.
I don’t understand your other objection as fully (“thinking about long-tail probability distributions should convince you that that wouldn’t add up to normality”). Still, I suspect that if all we’re dealing with are finite (though unimaginably huge) quantities, it is not a problem either.
I’m saying that your objection doesn’t add up to the Born probabilities. Say you set up an observation such that it can have one result with 90% probability (and the device lights up with a 0), or a large number of other results with dwindling probabilities attached (the device lights up with some positive integer). Note that in the system with you observing this, there wouldn’t be a reason for the first state to branch any faster thereafter than any of the others.
Your suggested anthropic probabilities would say that, since there are many “more” distinct versions of you that see numbers greater than 0 than versions of you that see 0, you should expect seeing 0 to be very unlikely. But this is just wrong.
The configurations corresponding to copies of you have to be weighted by measure, and the simplest extrapolation of subjective anticipation is to say that the futures of ordinary physical-you add up as equivalent to one copy of you instantiated in each such branch, even should that copy be identical across branches.
(I’m not quite comfortable with this; the question of what I ought to expect if 2 identical copies are run together troubles me. But the above at least explains my objection to your argument, I hope.)
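A minimal sketch of the contrast being drawn here, with made-up numbers: only the 90%-on-0 device with a long tail of rarer outcomes comes from the comment above; the tail shape and outcome count are arbitrary stand-ins.

```python
# Toy comparison of two anticipation rules for the device described above:
# it shows 0 with Born measure 0.9, or some positive integer n with a
# rapidly dwindling measure otherwise (the exact tail shape is arbitrary).

NUM_TAIL_OUTCOMES = 100_000  # stand-in for "a large number" of rare outcomes

# Born weights: 0.9 on outcome 0, the remaining 0.1 spread over the tail.
raw_tail = [0.5 ** n for n in range(1, NUM_TAIL_OUTCOMES + 1)]
scale = 0.1 / sum(raw_tail)
weights = {0: 0.9}
weights.update({n: w * scale for n, w in enumerate(raw_tail, start=1)})

# Rule 1: weight each outcome by its Born measure.
p_zero_born = weights[0]            # 0.9 -- matches what is actually observed

# Rule 2: count each distinct branch once, ignoring measure.
p_zero_counting = 1 / len(weights)  # ~1e-5 -- "seeing 0 should be very unlikely"

print(f"Born-weighted:  {p_zero_born:.6f}")
print(f"Branch-counted: {p_zero_counting:.6f}")
```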
EDIT: Actually, on further reflection, I may have misunderstood you. If you’re talking about a finite version like Hanson’s Mangled Worlds, where you can count identical configurations by multiplicity, then you’re at least not violating the Born probabilities. But then, it seems clear that you should count computer-simulated identical copies by Everett multiplicity as well, which it appears you’re not.
To me, probability is observed reality, and the irrelevance of multiple identical/fully-isomorphic copies is a philosophical given. The state of our knowledge is certainly not complete enough to rule out that conjunction.
Push me for details, and I’m less sure. I suspect that once you’re inside the 90% side of the wave function, you actually do branch faster; I’m certainly not aware of any mathematical demonstrations that this isn’t so, even within our current incomplete quantum understanding. It could also be that probability only appears to work because consciousness quickly ceases to exist in those branches in which it’s violated on a large scale, though there are obvious problems with that line of argument.
Anyway, if you accept these two postulates—one observationally, and the other philosophically—then the human’s logic works.